
    A scalable packetised radio astronomy imager

    Modern radio astronomy telescopes the world over require digital back-ends. The complexity of these systems depends on many site-specific factors, including the number of antennas, beams and frequency channels, and the bandwidth to be processed. With the increasing popularity of ever larger interferometric arrays, the processing requirements for these back-ends have increased significantly. While the techniques for building these back-ends are well understood, every installation typically still takes many years to develop, as the instruments use highly specialised, custom hardware in order to cope with the demanding engineering requirements. Modern technology has enabled reprogrammable FPGA-based processing boards, together with packet-based switching techniques, to perform all the digital signal processing requirements of a modern radio telescope array. The various instruments used by radio telescopes are functionally very different, but the component operations remain remarkably similar and many share core functionalities. Generic processing platforms are thus able to share signal processing libraries and can acquire different personalities to perform different functions simply by reprogramming them and rerouting the data appropriately. Furthermore, Ethernet-based packet-switched networks are highly flexible and scalable, enabling the same instrument design to be scaled to larger installations simply by adding additional processing nodes and larger network switches. The ability of a packetised network to transfer data to arbitrary processing nodes, along with these nodes' reconfigurability, allows for unrestrained partitioning of designs and resource allocation. This thesis describes the design and construction of the first working radio astronomy imaging instrument hosted on Ethernet-interconnected reprogrammable FPGA hardware. I attempt to establish an optimal packetised architecture for the most popular instruments, with particular attention to the core array functions of correlation and beamforming. Emphasis is placed on the requirements for South Africa's MeerKAT array. A demonstration system is constructed and deployed on the KAT-7 array, MeerKAT's prototype. This research promises reduced instrument development time, lower costs, improved reliability and closer collaboration between telescope design teams.
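    As a minimal illustration of the partitioning described above (the channel counts, node counts and the round-robin channel-to-node mapping below are illustrative assumptions, not details taken from the thesis), the sketch shows how packetised, channelised antenna data can be routed over a switch so that each processing node handles its own slice of the frequency band, and scaling up amounts to adding nodes:

        # Illustrative sketch of packetised work partitioning: frequency
        # channels are divided among processing nodes, and an Ethernet switch
        # routes each channel's packets (from every antenna) to the node that
        # owns that channel.  All numbers and names are hypothetical.

        N_ANTENNAS = 16
        N_CHANNELS = 1024
        N_NODES = 8          # scale the instrument by changing this alone

        def node_for_channel(chan: int, n_nodes: int = N_NODES) -> int:
            """Round-robin mapping of a frequency channel to a processing node."""
            return chan % n_nodes

        def nodes_fed_by_each_antenna(n_channels: int = N_CHANNELS) -> list:
            """Nodes that must receive packets from any one antenna.

            Because each node processes *all* antennas for its own subset of
            channels, every antenna's channelised stream fans out across the
            switch to every node that owns at least one channel.
            """
            return sorted({node_for_channel(c) for c in range(n_channels)})

        if __name__ == "__main__":
            # Doubling N_NODES halves the channels (and hence the work) per node,
            # which is the scaling property a packetised architecture relies on.
            per_node = {n: sum(1 for c in range(N_CHANNELS) if node_for_channel(c) == n)
                        for n in range(N_NODES)}
            print("channels per node:", per_node)
            print("each antenna sends to nodes:", nodes_fed_by_each_antenna())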

    On The Integral Coding Advantage In Unit Combination Networks

    Network coding is a networking paradigm which allows network nodes to combine different pieces of data at various steps in the transmission rather than simply copying and forwarding the data. Network coding has various applications, and can be used to increase throughput, routing efficiency, robustness, and security. The original benefit that was demonstrated was improving the allowable transmission rate for a multicast session, and this application has been the focus of much research. One important parameter, the coding advantage, is the ratio of throughput with network coding to that without. The multicast networks that have a non-trivial coding advantage (i.e., coding advantage greater than 1) all seem to contain a substructure called the combination network, which has a source, n relay nodes, and (n choose k) receivers, in which each receiver is adjacent to a unique subset of k relay nodes. The coding advantage in combination networks has previously been determined for networks with fractional routing. In this paper, we address integral routing, which is more appropriate for networks (like optical wavelength-division-multiplexing networks) which allow only coarse-grained subdivision of the available bandwidth on any given channel. We give exact formulas for the integral coding advantage in both directed and undirected networks. For directed networks, we show that the coding advantage is k/⌊n/(n−k+1)⌋. For undirected networks, we show that the coding advantage is k/(k−1). The latter result fits with conjectures that the integral coding advantage in any undirected network is bounded above by 2.
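    As a concrete illustration (a sketch under the standard unit-capacity combination-network model, with the directed formula in the reconstructed form above rather than quoted verbatim from the paper): coding can deliver k symbols per network use, while any multicast tree must touch at least n−k+1 of the n relays, so integral routing packs at most ⌊n/(n−k+1)⌋ edge-disjoint trees through the source's n unit-capacity edges.

        # Sketch of the integral coding advantage in a combination network
        # C(n, k).  Assumes the standard unit-capacity model; the formulas are
        # reconstructions consistent with the abstract, not quotations of the
        # paper's proofs.
        from math import comb

        def directed_integral_advantage(n: int, k: int) -> float:
            """Network coding achieves rate k, while any multicast tree must
            include at least n - k + 1 of the n relays, so integral routing
            packs at most floor(n / (n - k + 1)) trees."""
            coding_rate = k
            routing_rate = n // (n - k + 1)
            return coding_rate / routing_rate

        def undirected_integral_advantage(k: int) -> float:
            """k / (k - 1), consistent with the conjectured upper bound of 2."""
            return k / (k - 1)

        if __name__ == "__main__":
            n, k = 6, 4
            print(f"C({n},{k}) has {comb(n, k)} receivers")
            print("directed integral advantage:  ", directed_integral_advantage(n, k))
            print("undirected integral advantage:", undirected_integral_advantage(k))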

    First Data On Aquaculture of the Tripletail, Lobotes surinamensis, a Promising Candidate Species For U.S. Marine Aquaculture

    The Tripletail, Lobotes surinamensis, is a warm-water pelagic fish that is increasingly targeted by U.S. anglers. The superior quality of Tripletail flesh, coupled with the lack of domestic commercial fisheries, stimulated interest in developing aquaculture of this species. In this work, photo-thermal conditioning of captive-held broodstocks promoted maturation in females, but spontaneous spawning was not observed. GnRHa slow-release implants induced ovulation in late vitellogenic females, but fertility remained below 10% when GnRHa was administered alone. However, spawns with high fertility (up to 85%) were obtained when a dopamine antagonist was administered in conjunction with GnRHa implants, indicating that dopamine inhibition impaired final gamete maturation, in particular sperm production in males, under aquaculture conditions. Tripletail larvae successfully initiated exogenous feeding on enriched rotifers followed by Artemia nauplii and were weaned to prepared feeds at 25 days post hatch, yet with low survival through the late phases of larval culture. Pilot grow-out trials at low density in recirculating systems revealed impressive growth rates averaging over 170 g/month through to a market size above 1 kg. While protocols for hatchery culture and grow-out still need to be optimized, current data suggest that Tripletail could become a successful species for U.S. marine aquaculture.

    A Scalable Correlator Architecture Based on Modular FPGA Hardware, Reuseable Gateware, and Data Packetization

    A new generation of radio telescopes is achieving unprecedented levels of sensitivity and resolution, as well as increased agility and field-of-view, by employing high-performance digital signal processing hardware to phase and correlate large numbers of antennas. The computational demands of these imaging systems scale in proportion to BMN^2, where B is the signal bandwidth, M is the number of independent beams, and N is the number of antennas. The specifications of many new arrays lead to demands in excess of tens of PetaOps per second. To meet this challenge, we have developed a general purpose correlator architecture using standard 10-Gbit Ethernet switches to pass data between flexible hardware modules containing Field Programmable Gate Array (FPGA) chips. These chips are programmed using open-source signal processing libraries we have developed to be flexible, scalable, and chip-independent. This work reduces the time and cost of implementing a wide range of signal processing systems, with correlators foremost among them, and facilitates upgrading to new generations of processing technology. We present several correlator deployments, including a 16-antenna, 200-MHz bandwidth, 4-bit, full Stokes parameter application deployed on the Precision Array for Probing the Epoch of Reionization.
    Comment: Accepted to Publications of the Astronomical Society of the Pacific. 31 pages.
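    To make the BMN^2 scaling concrete, the rough sketch below plugs in numbers: only B = 200 MHz, N = 16 and the four polarisation products come from the deployment described in the abstract; the operations-per-sample factor and the large-array parameters are illustrative assumptions.

        # Rough sketch of correlator compute demand scaling as B * M * N^2.
        # Only B = 200 MHz, N = 16 and the 4 polarisation products are taken
        # from the abstract; the ops-per-sample factor and the "large array"
        # parameters are illustrative assumptions.

        def correlator_ops_per_second(bandwidth_hz, n_beams, n_antennas,
                                      products=4, real_ops_per_cmac=8):
            """Very rough cross-correlation demand: one complex multiply-
            accumulate per baseline, per polarisation product, per beam,
            per sample."""
            baselines = n_antennas * (n_antennas + 1) // 2  # includes autocorrelations
            return bandwidth_hz * n_beams * baselines * products * real_ops_per_cmac

        if __name__ == "__main__":
            paper = correlator_ops_per_second(200e6, 1, 16)   # deployment quoted in the abstract
            large = correlator_ops_per_second(1e9, 1, 2048)   # hypothetical large array
            print(f"16 antennas, 200 MHz:        {paper / 1e12:.1f} TeraOps/s")
            print(f"2048 antennas, 1 GHz (hyp.): {large / 1e15:.0f} PetaOps/s")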